
Predicting Target Language CCG Supertags Improves Neural Machine Translation


Abstract

Neural machine translation (NMT) models are able to partially learn syntactic information from sequential lexical information. Still, some complex syntactic phenomena such as prepositional phrase attachment are poorly modeled. This work aims to answer two questions: 1) Does explicitly modeling target language syntax help NMT? 2) Is tight integration of words and syntax better than multitask training? We introduce syntactic information in the form of CCG supertags in the decoder, by interleaving the target supertags with the word sequence. Our results on WMT data show that explicitly modeling target syntax improves machine translation quality for German→English, a high-resource pair, and for Romanian→English, a low-resource pair, and also improves several syntactic phenomena including prepositional phrase attachment. Furthermore, a tight coupling of words and syntax improves translation quality more than multitask training. By combining target syntax with source-side dependency labels added in the embedding layer, we obtain a total improvement of 0.9 BLEU for German→English and 1.2 BLEU for Romanian→English.
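The abstract's key mechanism is the interleaving of target-side CCG supertags with the word sequence predicted by the decoder. The snippet below is a minimal sketch of that interleaving, not the authors' code: the tag-before-word order and the example supertags are assumptions chosen only to illustrate what the combined output sequence could look like.

```python
def interleave(words, supertags):
    """Return the interleaved target sequence: tag1 word1 tag2 word2 ..."""
    assert len(words) == len(supertags)
    out = []
    for word, tag in zip(words, supertags):
        out.append(tag)   # supertag token first (order assumed for illustration)
        out.append(word)  # then the word it labels
    return out


words = ["We", "obey", "the", "law"]
tags = ["NP", "(S[dcl]\\NP)/NP", "NP/N", "N"]  # hypothetical CCG supertags
print(" ".join(interleave(words, tags)))
# NP We (S[dcl]\NP)/NP obey NP/N the N law
```

In such a setup the decoder emits one vocabulary that mixes supertag tokens and word tokens, so the syntax is predicted jointly with the translation rather than in a separate multitask output.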
